Can Machines Think?

My short answer is no.

Lenat and Marcus (2023) is Doug Lenat’s last paper. Gary Marcus wrote a eulogy and a summary of why Cyc was a good idea, one that has been overtaken by other ideas for now but remains an important lesson for the future:

Metaphorically, Lenat tried to find a path across the mountain of common sense, the millions of things we know about the world but rarely articulate. He didn’t fully succeed – we will need a different path – but he picked the critical mountain that we still must cross.

Doug Lenat¹ spent most of his life writing a big symbolic reasoning-based system called Cyc. The idea was to manually encode every single relationship in the space of all ideas. It would “know” facts that we normally take as common sense, but that machines find difficult: that if you are married to somebody, they are also married to you; or if something is cold, it’s likely to make things around it cold as well.
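To make the flavor of this concrete, here is a toy sketch in Python (not Cyc’s actual CycL language, and the relation and fact names are invented for illustration) of what hand-encoding a common-sense rule like “marriage is symmetric” looks like:

```python
# Toy sketch, not Cyc's real representation: facts are (relation, x, y)
# triples, and a symmetry rule derives the mirrored triple.

facts = {("married_to", "alice", "bob")}

def apply_symmetry(relation, facts):
    """If (relation, x, y) holds, derive (relation, y, x)."""
    derived = {(r, y, x) for (r, x, y) in facts if r == relation}
    return facts | derived

facts = apply_symmetry("married_to", facts)
print(("married_to", "bob", "alice") in facts)  # True
```

The point is that nothing here is learned: a human had to decide that this particular relation is symmetric and write the rule down, and Cyc required millions of such hand-written assertions.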

The project was ambitious and he knew it, but eventually other technologies made it less relevant. When he started, there was no such thing as Wikipedia, for example – a huge, crowd-sourced compilation of all the world’s knowledge. Some of the “Big Data” achievements we now take for granted seemed impossibly complex until they were actually accomplished. And of course computers have become orders of magnitude faster and more capable.

References

Lenat, Doug, and Gary Marcus. 2023. “Getting from Generative AI to Trustworthy AI: What LLMs Might Learn from Cyc.” arXiv. http://arxiv.org/abs/2308.04445.

Footnotes

  1. Who was my teacher in college and wrote my grad school recommendations.↩︎